158 research outputs found

    Multimodal metaphors for generic interaction tasks in virtual environments

    Virtual Reality (VR) systems provide additional input and output channels for human-computer interaction in virtual environments. Such VR technologies give users deeper insight into highly complex data sets, but also place high demands on the user's ability to interact with virtual objects. This thesis presents and discusses the development and evaluation of new multimodal interaction metaphors for generic interaction tasks in virtual environments. Using a VR system, the application of these concepts is demonstrated in two case studies from the domains of 3D city visualization and seismic volume rendering.

    Motion In-Betweening with Phase Manifolds

    This paper introduces a novel data-driven motion in-betweening system that reaches target poses of characters by making use of phase variables learned by a Periodic Autoencoder. Our approach utilizes a mixture-of-experts neural network model in which the phases cluster movements in both space and time with different expert weights. Each generated set of weights then produces a sequence of poses in an autoregressive manner between the current and target state of the character. In addition, a learned bi-directional control scheme is implemented to satisfy constraints such as poses manually modified by the animators or end effectors that must be reached by the animation. The results demonstrate that using phases for motion in-betweening sharpens the interpolated movements and stabilizes the learning process. Moreover, phases can also be used to synthesize more challenging movements beyond locomotion behaviors. Additionally, style control is enabled between given target keyframes. Our proposed framework can compete with popular state-of-the-art methods for motion in-betweening in terms of motion quality and generalization, especially for long transition durations. Our framework contributes to faster prototyping workflows for creating animated character sequences, which is of enormous interest for the game and film industry. Comment: 17 pages, 11 figures, conference.
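    The abstract's core mechanism, phase variables gating a mixture-of-experts network that autoregressively generates poses toward a target, can only be sketched from this description; the minimal PyTorch illustration below is hypothetical (layer sizes, the single blended linear layer, and the gating design are assumptions, not the paper's architecture):

```python
import torch
import torch.nn as nn

class MixtureOfExpertsStep(nn.Module):
    """Hypothetical sketch of one autoregressive in-betweening step:
    phase variables gate a set of expert weights, and the blended
    network maps (current pose, target pose) to the next pose."""

    def __init__(self, pose_dim, phase_dim, hidden_dim=256, num_experts=8):
        super().__init__()
        # Gating network: phase vector -> blending coefficients per expert
        self.gating = nn.Sequential(
            nn.Linear(phase_dim, hidden_dim), nn.ELU(),
            nn.Linear(hidden_dim, num_experts), nn.Softmax(dim=-1))
        # Expert parameters for a single linear layer, one set per expert
        in_dim = 2 * pose_dim  # current pose + target pose
        self.experts_w = nn.Parameter(torch.randn(num_experts, in_dim, pose_dim) * 0.01)
        self.experts_b = nn.Parameter(torch.zeros(num_experts, pose_dim))

    def forward(self, pose, target, phase):
        alpha = self.gating(phase)                                # (B, E)
        # Blend expert weights with the phase-dependent coefficients
        w = torch.einsum('be,eio->bio', alpha, self.experts_w)    # (B, in, out)
        b = torch.einsum('be,eo->bo', alpha, self.experts_b)      # (B, out)
        x = torch.cat([pose, target], dim=-1)                     # (B, in)
        delta = torch.einsum('bi,bio->bo', x, w) + b
        return pose + delta                                       # next pose


# Autoregressive rollout between a start and a target keyframe (illustrative only)
model = MixtureOfExpertsStep(pose_dim=69, phase_dim=10)
pose = torch.zeros(1, 69)
target = torch.randn(1, 69)
with torch.no_grad():
    for t in range(30):                  # 30 in-between frames
        phase = torch.randn(1, 10)       # stand-in for learned Periodic Autoencoder phases
        pose = model(pose, target, phase)
```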

    3D User Interfaces for Collaborative Work


    Immersive Neural Graphics Primitives

    Neural radiance field (NeRF), in particular its extension by instant neural graphics primitives, is a novel rendering method for view synthesis that uses real-world images to build photo-realistic immersive virtual scenes. Despite its potential, research on the combination of NeRF and virtual reality (VR) remains sparse. Currently, there is no integration into typical VR systems available, and the performance and suitability of NeRF implementations for VR have not been evaluated, for instance, for different scene complexities or screen resolutions. In this paper, we present and evaluate a NeRF-based framework that is capable of rendering scenes in immersive VR, allowing users to freely move their heads to explore complex real-world scenes. We evaluate our framework by benchmarking three different NeRF scenes concerning their rendering performance at different scene complexities and resolutions. Utilizing super-resolution, our approach can yield a frame rate of 30 frames per second with a resolution of 1280x720 pixels per eye. We discuss potential applications of our framework and provide an open source implementation online. Comment: Submitted to IEEE VR, currently under review.
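    A rough sketch of the render-at-reduced-resolution-then-upscale idea mentioned in the abstract; the framework's actual API and super-resolution method are not described there, so every name and the 0.5x render scale below are placeholders:

```python
import numpy as np

TARGET_RES = (1280, 720)   # per-eye output resolution reported in the abstract
RENDER_SCALE = 0.5         # assumed: NeRF rendered at reduced resolution, then upscaled

def render_eye(nerf, pose, resolution):
    """Placeholder for an instant-ngp style NeRF render call; the real
    framework's interface is not given in the abstract."""
    w, h = resolution
    return nerf.render(pose, width=w, height=h)   # hypothetical method

def upscale(image, target_res):
    """Nearest-neighbour upscaling as a stand-in for the (unspecified)
    super-resolution step used to reach 30 fps at 1280x720 per eye."""
    w_t, h_t = target_res
    h_s, w_s = image.shape[:2]
    rows = np.arange(h_t) * h_s // h_t
    cols = np.arange(w_t) * w_s // w_t
    return image[rows][:, cols]

def render_stereo_frame(nerf, left_pose, right_pose):
    """Render both eyes at low resolution and upscale to the target resolution."""
    low_res = (int(TARGET_RES[0] * RENDER_SCALE), int(TARGET_RES[1] * RENDER_SCALE))
    left = upscale(render_eye(nerf, left_pose, low_res), TARGET_RES)
    right = upscale(render_eye(nerf, right_pose, low_res), TARGET_RES)
    return left, right   # would be submitted to the VR compositor each frame
```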

    Object Manipulation in Virtual Reality Under Increasing Levels of Translational Gain

    Room-scale Virtual Reality (VR) has become an affordable consumer reality, with applications ranging from entertainment to productivity. However, the limited physical space available for room-scale VR in the typical home or office environment poses a significant problem. To solve this, physical spaces can be extended by amplifying the mapping of physical to virtual movement (translational gain). Although amplified movement has been used since the earliest days of VR, little is known about how it influences reach-based interactions with virtual objects, now a standard feature of consumer VR. Consequently, this paper explores the picking and placing of virtual objects in VR for the first time, with translational gains between 1x (a one-to-one mapping of a 3.5m*3.5m virtual space to the same sized physical space) and 3x (10.5m*10.5m virtual mapped to 3.5m*3.5m physical). Results show that reaching accuracy is maintained for gains up to 2x; beyond this, accuracy diminishes and simulator sickness and perceived workload increase. We suggest gain levels of 1.5x to 1.75x can be utilized without compromising the usability of a VR task, significantly expanding the bounds of interactive room-scale VR.
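    For illustration, translational gain amounts to scaling tracked movement about a reference point; a minimal sketch follows (the choice to scale only horizontal motion and leave height unchanged is an assumption, not taken from the paper):

```python
import numpy as np

def apply_translational_gain(physical_pos, origin, gain):
    """Scale physical movement about a tracking-space origin so that a
    3.5m x 3.5m physical area maps onto a larger virtual area.
    gain = 1.0 is a one-to-one mapping; gain = 3.0 maps 3.5m onto 10.5m."""
    offset = np.asarray(physical_pos) - np.asarray(origin)
    # Amplify only horizontal movement (x, z); keep vertical (y) unscaled
    scaled = offset * np.array([gain, 1.0, gain])
    return np.asarray(origin) + scaled

# Example: user stands 1.0m to the right of the room centre, gain 1.75x
virtual_pos = apply_translational_gain([1.0, 1.6, 0.5], [0.0, 0.0, 0.0], 1.75)
# -> virtual position [1.75, 1.6, 0.875]
```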

    RayCursor: a 3D Pointing Facilitation Technique based on Raycasting

    Raycasting is the most common target pointing technique in virtual reality environments. However, performance on small and distant targets is impacted by the accuracy of the pointing device and the user's motor skills. Current pointing facilitation techniques are only applied in the context of the virtual hand, i.e. for targets within reach. We propose enhancements to Raycasting: filtering the ray, and adding a controllable cursor on the ray to select the nearest target. We describe a series of studies for the design of the visual feedforward and filtering technique, as well as a comparative study between different 3D pointing techniques. Our results show that highlighting the nearest target is one of the most efficient visual feedforward techniques. We also show that filtering the ray drastically reduces the error rate. Finally, we show the benefits of RayCursor compared to Raycasting and another technique from the literature.
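    The two enhancements, a filtered ray and a cursor that snaps selection to the nearest target, can be sketched as follows; simple exponential smoothing stands in for the paper's filter (which the abstract does not specify), and all names are hypothetical:

```python
import numpy as np

def smooth_direction(prev_dir, raw_dir, alpha=0.2):
    """Exponential smoothing of the ray direction as a stand-in for the
    filtering technique described in the paper."""
    d = (1 - alpha) * np.asarray(prev_dir) + alpha * np.asarray(raw_dir)
    return d / np.linalg.norm(d)

def nearest_target_to_cursor(origin, direction, cursor_distance, targets):
    """Place a cursor at a controllable distance along the filtered ray and
    return the index of the target closest to that cursor position."""
    cursor = np.asarray(origin) + cursor_distance * np.asarray(direction)
    dists = [np.linalg.norm(np.asarray(t) - cursor) for t in targets]
    return int(np.argmin(dists))

# Example usage with made-up tracked data
direction = smooth_direction([0.0, 0.0, -1.0], [0.05, 0.02, -1.0])
targets = [[0.0, 0.0, -2.0], [0.5, 0.1, -2.2], [-0.3, 0.0, -1.8]]
selected = nearest_target_to_cursor([0.0, 0.0, 0.0], direction, 2.0, targets)
```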

    Stereo vision and acuity tests within a virtual reality set-up

    Dankert T, Heil D, Pfeiffer T. Stereo vision and acuity tests within a virtual reality set-up. In: Latoschik ME, Staadt O, Steinicke F, eds. Virtuelle und Erweiterte Realität - 10. Workshop der GI-Fachgruppe VR/AR. Shaker Verlag; 2013: 185-188. The provision of stereo images to facilitate depth perception by stereopsis is one key aspect of many Virtual Reality installations, and there are many technical approaches to do so. However, differences in the visual capabilities of the user and technical limitations of a specific set-up might restrict the spatial range in which stereopsis can be facilitated. In this paper, we transfer an existing test for stereo vision from the real world to a virtual environment and extend it to measure stereo acuity.
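    Stereo acuity is commonly quantified as the smallest detectable difference in binocular disparity between two depths; the abstract does not give the test's formulas, but the underlying geometry can be sketched as follows (IPD value and distances are illustrative only):

```python
import math

def vergence_angle(ipd_m, distance_m):
    """Binocular vergence angle (radians) to a point straight ahead at the
    given distance, for an inter-pupillary distance ipd_m."""
    return 2.0 * math.atan(ipd_m / (2.0 * distance_m))

def disparity_arcsec(ipd_m, near_m, far_m):
    """Relative binocular disparity between two depths, in arcseconds.
    A stereo-acuity test varies this quantity until the depth difference
    is no longer detectable."""
    delta = vergence_angle(ipd_m, near_m) - vergence_angle(ipd_m, far_m)
    return math.degrees(delta) * 3600.0

# Example: IPD 0.063 m, targets at 2.00 m and 2.05 m
print(disparity_arcsec(0.063, 2.00, 2.05))   # about 158 arcseconds
```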

    Wind and warmth in virtual reality - requirements and chances

    Hülsmann F, Mattar N, Fröhlich J, Wachsmuth I. Wind and warmth in virtual reality - requirements and chances. In: Latoschik ME, Staadt O, Steinicke F, eds. Proceedings of the Workshop Virtuelle & Erweiterte Realität 2013. Aachen: Shaker Verlag; 2013: 133-144. Wind and warmth are often ignored in Virtual Reality systems, even though studies suggest that they are able to improve users' presence as well as task performance for certain challenges. In this work, the requirements of a wind and warmth system are analyzed and the hardware setup used to create these kinds of sensations is described. Based on the identified requirements, a system able to create wind and warmth in a CAVE environment is presented. Special challenges, such as the possible disruptive influence of specialized hardware on user tracking, are addressed together with their solutions. Furthermore, an approach to integrating wind and warmth into Virtual Reality applications is described and discussed.
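    One common way to drive such hardware from a virtual scene is to map the wind's direction and strength to per-fan intensities; the sketch below assumes a hypothetical four-fan layout and a cosine falloff, neither of which is taken from the paper:

```python
import math

# Hypothetical layout: compass bearings (degrees) of fans mounted around a CAVE
FAN_BEARINGS = {"front": 0.0, "right": 90.0, "back": 180.0, "left": 270.0}

def fan_intensities(wind_bearing_deg, wind_strength):
    """Map a virtual wind source (bearing the wind blows from, strength 0..1)
    to per-fan intensities using a cosine falloff: a fan contributes only if
    it faces the wind direction."""
    intensities = {}
    for name, bearing in FAN_BEARINGS.items():
        diff = math.radians(wind_bearing_deg - bearing)
        intensities[name] = max(0.0, math.cos(diff)) * wind_strength
    return intensities

# Wind blowing from the front-right at 60% strength
print(fan_intensities(45.0, 0.6))
# {'front': 0.42..., 'right': 0.42..., 'back': 0.0, 'left': 0.0}
```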